3 research outputs found

    Do you see what I mean? The role of visual speech information in lexical representations

    Human speech is necessarily multimodal, and audiovisual redundancies in speech may play a vital role in speech perception across the lifespan. The majority of previous studies have focused on how language is learned from auditory input, but how audiovisual speech information is perceived and comprehended remains less well understood. Here, I examine how audiovisual and visual-only speech information is represented for known words, and whether intersensory processing efficiency predicts the strength of the lexical representation. To explore the relationship between intersensory processing ability (indexed by matching temporally synchronous auditory and visual stimulation) and the strength of lexical representations, adult subjects participated in an audiovisual word recognition task and the Intersensory Processing Efficiency Protocol (IPEP). Participants reliably identified the correct referent object across manipulations of modality (audiovisual vs. visual-only) and pronunciation (correct vs. mispronounced). Correlational analyses did not reveal any relationship between processing efficiency and visual speech information in lexical representations. However, the results presented here suggest that adults' lexical representations robustly include visual speech information and that visual speech information is processed sublexically during speech perception.
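    The correlational analysis described above boils down to relating per-subject IPEP efficiency scores to accuracy in the word recognition task. The following sketch is purely illustrative: the variable names, sample size, and simulated data are assumptions, not materials or results from the study.

```python
# Hypothetical sketch of the correlational analysis described above:
# relating IPEP intersensory processing efficiency scores to accuracy
# in the audiovisual word recognition task. All values are simulated
# placeholders, not data from the study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects = 40                                             # assumed sample size
ipep_efficiency = rng.normal(0.7, 0.1, n_subjects)          # placeholder IPEP scores
word_recognition_acc = rng.normal(0.85, 0.05, n_subjects)   # placeholder accuracy

# Pearson correlation between processing efficiency and recognition accuracy
r, p = stats.pearsonr(ipep_efficiency, word_recognition_acc)
print(f"r = {r:.3f}, p = {p:.3f}")  # a non-significant r mirrors the reported null result
```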

    Functional annotation of human long noncoding RNAs via molecular phenotyping

    Long noncoding RNAs (lncRNAs) constitute the majority of transcripts in mammalian genomes, and yet their functions remain largely unknown. As part of the FANTOM6 project, we systematically knocked down the expression of 285 lncRNAs in human dermal fibroblasts and quantified cellular growth, morphological changes, and transcriptomic responses using Cap Analysis of Gene Expression (CAGE). Antisense oligonucleotides targeting the same lncRNAs exhibited global concordance, and the molecular phenotype, measured by CAGE, recapitulated the observed cellular phenotypes while providing additional insights into the affected genes and pathways. Here, we disseminate the largest-to-date lncRNA knockdown data set with molecular phenotyping (over 1,000 CAGE deep-sequencing libraries) for further exploration and highlight functional roles for ZNF213-AS1 and lnc-KHDC3L-2.
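    The ASO concordance reported above can be pictured as a rank correlation of CAGE-derived fold-changes between two independent antisense oligonucleotides targeting the same lncRNA. The sketch below is a hypothetical illustration with simulated data; the Spearman correlation is an assumed stand-in for the project's actual concordance metric.

```python
# Hypothetical sketch of an ASO-concordance check of the kind described:
# correlate per-gene log2 fold-changes between two antisense
# oligonucleotides (ASOs) targeting the same lncRNA. Data are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_genes = 5000
true_response = rng.normal(0, 1, n_genes)            # shared knockdown response
aso1 = true_response + rng.normal(0, 0.5, n_genes)   # ASO 1 with measurement noise
aso2 = true_response + rng.normal(0, 0.5, n_genes)   # ASO 2 with measurement noise

# High rank correlation indicates concordant molecular phenotypes
rho, p = stats.spearmanr(aso1, aso2)
print(f"Spearman rho = {rho:.2f} (p = {p:.3g})")
```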

    Audiovisual speech processing: Implications for speech perception and language development

    This dissertation aims to empirically assess the complex, multilevel relationships between audiovisual speech perception and early language development. The majority of extant language development research has justifiably focused on infants' ability to learn language from auditory input, and indeed, infants are precocious auditory learners (Saffran & Kirkham, 2018). Complementary to auditory speech, however, are the necessarily redundant facial movements used to articulate speech. Outside of language development research, multimodal processing has been theorized to facilitate perceptual learning and cognitive development (Bahrick & Lickliter, 2000), but only a small number of empirical studies have investigated how audiovisual speech perception in infancy relates to later language development. Infants demonstrate sensitivity to audiovisual speech early in development (Patterson & Werker, 2003); however, it remains an open question how infants may functionally use these redundancies in the service of language learning. This dissertation first examines how infants integrate auditory and visual speech at 4.5 months by measuring their perception of the McGurk effect. To augment this behavioral work, a meta-analysis assesses infants' perception of the McGurk effect across the available published and unpublished literature. A dynamic neural field (DNF) computational model then explores the real-time perceptual and neurocognitive processes that give rise to the McGurk effect. Next, to explore how audiovisual speech perception in infancy relates to later language outcomes, longitudinal relationships are examined between infants' perception of the McGurk effect, their native and non-native phoneme discrimination at 7 months, and vocabulary outcomes at 13 and 18 months. Finally, the utility of audiovisual redundancies was tested with infants in a difficult object-label association laboratory task. Together, this dissertation constitutes a body of empirical and computational work on infants' audiovisual speech perception at multiple timepoints across children's burgeoning language ability in the first two years of life.
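    For readers unfamiliar with dynamic neural field models, the sketch below shows the standard one-dimensional Amari field that DNF models of this kind build on: local excitation and surround inhibition let a localized input drive a self-stabilizing activation peak. All parameters are illustrative assumptions, not the dissertation's actual model of the McGurk effect.

```python
# A minimal sketch of a dynamic neural field (DNF) in the standard Amari
# formulation: tau * du/dt = -u + h + stimulus + convolution(kernel, f(u)).
# Parameters are illustrative, not taken from the dissertation's model.
import numpy as np

x = np.linspace(-10, 10, 201)          # feature dimension (e.g., a phonetic space)
dx = x[1] - x[0]
tau, h, dt = 20.0, -5.0, 1.0           # time constant, resting level, Euler step
u = np.full_like(x, h)                 # field activation starts at rest

# Local-excitation / surround-inhibition interaction kernel
kernel = 10.0 * np.exp(-0.5 * (x / 1.5) ** 2) - 4.0

def f(u):                              # sigmoidal output nonlinearity
    return 1.0 / (1.0 + np.exp(-u))

stimulus = 8.0 * np.exp(-0.5 * ((x - 2.0) / 1.5) ** 2)  # localized input at x = 2

for _ in range(500):                   # integrate the field dynamics over time
    interaction = np.convolve(f(u), kernel, mode="same") * dx
    u += dt / tau * (-u + h + stimulus + interaction)

print("peak location:", x[np.argmax(u)])  # a self-stabilized peak near x = 2
```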